Temporal-Frequency Co-training for Time Series Semi-supervised Learning

Authors

Abstract

Semi-supervised learning (SSL) has been actively studied due to its ability to alleviate the reliance of deep learning models on labeled data. Although existing SSL methods based on pseudo-labeling strategies have made great progress, they rarely consider time-series data's intrinsic properties (e.g., temporal dependence). Learning representations by mining the inherent properties of time series has recently gained much attention, yet how to utilize such feature-design paradigms for SSL has not been explored. To this end, we propose a Time Series SSL framework via Temporal-Frequency Co-training (TS-TFC), leveraging the complementary information from two distinct views for unlabeled data learning. In particular, TS-TFC employs time-domain and frequency-domain views to train two deep neural networks simultaneously, where each view's pseudo-labels, generated by label propagation in the representation space, are adopted to guide the training of the other view's classifier. To enhance the discriminative representations between categories, we propose a temporal-frequency supervised contrastive learning module, which integrates the learning difficulty of categories to improve the quality of pseudo-labels. Through co-training of the pseudo-labels obtained from the two representation spaces, the complementary information is exploited to enable the model to better learn the distribution of categories. Extensive experiments on 106 UCR datasets show that TS-TFC outperforms state-of-the-art methods, demonstrating the effectiveness and robustness of our proposed model.
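The core idea of the two views can be sketched in a few lines. The following is a minimal, illustrative toy, not the paper's implementation: it builds a time-domain view (raw series) and a frequency-domain view (FFT magnitudes) of the same toy data, and shows one co-training round in which each view produces pseudo-labels for the unlabeled pool that would guide the other view's classifier. The nearest-centroid classifier and the synthetic sinusoid data are assumptions made for brevity; TS-TFC uses deep networks and label propagation instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled/unlabeled time series: two classes of noisy sinusoids
# at different frequencies, with random phase per series.
def make_series(freq, n):
    t = np.linspace(0, 1, 128)
    return np.stack([np.sin(2 * np.pi * freq * t + rng.uniform(0, 2 * np.pi))
                     + 0.1 * rng.normal(size=t.size) for _ in range(n)])

X_lab = np.concatenate([make_series(3, 5), make_series(9, 5)])
y_lab = np.array([0] * 5 + [1] * 5)
X_unl = np.concatenate([make_series(3, 20), make_series(9, 20)])

# Frequency-domain view: magnitude spectrum of the same series.
def freq_view(X):
    return np.abs(np.fft.rfft(X, axis=1))

# Stand-in classifier (illustrative only): nearest class centroid.
def nearest_centroid_predict(Xtr, ytr, Xte):
    centroids = np.stack([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
    d = ((Xte[:, None, :] - centroids[None]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

# One co-training round: each view labels the unlabeled pool,
# and those pseudo-labels would supervise the *other* view's classifier.
pseudo_from_time = nearest_centroid_predict(X_lab, y_lab, X_unl)
pseudo_from_freq = nearest_centroid_predict(freq_view(X_lab), y_lab,
                                            freq_view(X_unl))

acc = (pseudo_from_freq == np.array([0] * 20 + [1] * 20)).mean()
print(f"frequency-view pseudo-label agreement with truth: {acc:.2f}")
```

Note how the random phase makes the time-domain centroids nearly uninformative while the frequency-domain view separates the classes cleanly; this complementarity is exactly what co-training across the two views is meant to exploit.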


Similar Articles

Semi-Supervised Learning of Alternative Splicing Events Using Co-Training

Alternative splicing is a phenomenon that gives rise to multiple mRNA transcripts from a single gene. It is believed that a large number of genes undergo alternative splicing. Predicting alternative splicing events is a problem of great interest to biologists, as it can help them to understand transcript diversity. Supervised machine learning approaches can be used to predict alternative spli...


Semi-supervised Co-training Algorithm Based on Assisted Learning

The classification performance of the learner is weakened when unlabeled examples are mislabeled during the co-training process. A semi-supervised co-training algorithm based on assisted learning (AR-Tri-training) was proposed. Firstly, the assisted learning strategy was presented, which is combined with a rich-information strategy for designing the assisted learner. Secondly, the evaluation factor wa...


Temporal Ensembling for Semi-Supervised Learning

In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augm...
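The self-ensembling idea described in this preview can be sketched numerically. Below is a minimal toy, assuming simulated per-epoch network outputs rather than a real network: an exponential moving average accumulates each epoch's softmax predictions into a consensus target for the unlabeled samples, with the bias correction used when the accumulator starts at zero. The decay `alpha = 0.6`, the margin, and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# 4 unlabeled samples, 3 classes; simulate per-epoch network outputs as
# noisy logits with a clear margin toward the (hidden) true class.
n_epochs, alpha = 30, 0.6
true = np.array([0, 1, 2, 1])

Z = np.zeros((4, 3))  # accumulated ensemble of predictions
for epoch in range(n_epochs):
    logits = rng.normal(size=(4, 3)) + 3.0 * np.eye(3)[true]
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    Z = alpha * Z + (1 - alpha) * p              # EMA over epochs
    targets = Z / (1 - alpha ** (epoch + 1))     # bias-corrected consensus

# Consensus labels for the unlabeled samples after training.
print(targets.argmax(axis=1))
```

Averaging over epochs smooths out the per-epoch noise, so the consensus targets are more reliable than any single epoch's prediction, which is the intuition behind using them as training targets for the unlabeled data.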


Semi-Supervised Regression with Co-Training

In many practical machine learning and data mining applications, unlabeled training examples are readily available but labeled ones are fairly expensive to obtain. Therefore, semi-supervised learning algorithms such as co-training have attracted much attention. Previous research mainly focuses on semi-supervised classification. In this paper, a co-training style semi-supervised regression algor...


Supervised Learning and Co-training

Co-training under the Conditional Independence Assumption is among the models which demonstrate how radically the need for labeled data can be reduced if a huge amount of unlabeled data is available. In this paper, we explore how much credit for this saving must be assigned solely to the extra-assumptions underlying the Co-training model. To this end, we compute general (almost tight) upper and...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i7.26072